8 research outputs found

    Uncertainty-Aware Principal Component Analysis

    Full text link
    We present a technique to perform dimensionality reduction on data that is subject to uncertainty. Our method is a generalization of traditional principal component analysis (PCA) to multivariate probability distributions. Compared to non-linear methods, linear dimensionality reduction techniques have the advantage that the characteristics of such probability distributions remain intact after projection. We derive a representation of the PCA sample covariance matrix that respects potential uncertainty in each of the inputs, building the mathematical foundation of our new method: uncertainty-aware PCA. In addition to the accuracy and performance gained by our approach over sampling-based strategies, our formulation allows us to perform sensitivity analysis with regard to the uncertainty in the data. For this, we propose factor traces as a novel visualization that enables a better understanding of the influence of uncertainty on the chosen principal components. We provide multiple examples of our technique using real-world datasets. As a special case, we show how to propagate multivariate normal distributions through PCA in closed form. Furthermore, we discuss extensions and limitations of our approach.
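    The closed-form special case mentioned in the abstract can be illustrated with a short sketch (names are illustrative, and we assume the projection matrix W holds principal axes from standard PCA): pushing a multivariate normal N(mu, Sigma) through the linear map x -> Wx yields N(W mu, W Sigma W^T).

```python
import numpy as np

def project_gaussian(mu, Sigma, W):
    """Propagate N(mu, Sigma) through the linear map x -> W @ x,
    yielding the parameters of the normal N(W mu, W Sigma W^T)."""
    return W @ mu, W @ Sigma @ W.T

# Toy data: estimate principal axes from samples, then project a Gaussian.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
W = vecs[:, ::-1][:, :2].T        # rows = top-2 principal axes

mu = np.array([1.0, 2.0, 3.0])
Sigma = np.diag([0.5, 1.0, 2.0])
mu_p, Sigma_p = project_gaussian(mu, Sigma, W)
print(mu_p.shape, Sigma_p.shape)  # (2,) (2, 2)
```

    Because the projection is linear, the result is again a normal distribution, which is precisely why linear techniques preserve the characteristics of the input distributions after projection.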

    Quantitative Methods for Uncertainty Visualization

    No full text
    Uncertainty is ubiquitous in the data that we collect. Nevertheless, when users create visualizations of this data, it is frequently neglected. The reason for this is twofold: For one, many common algorithms cannot handle uncertain data. If this is the case, the only option is to omit information and solely consider the most likely realization of the data. The second reason is that uncertainty is difficult to communicate to the user, either due to the lack of suitable visual variables or because users lack literacy in understanding uncertainty and its underlying mathematical model: probability distributions. This thesis proposes methods to alleviate some of these problems by tackling two research questions: "How can we communicate uncertainty with its statistical properties?" and "How to adapt visualization methods to uncertainty?" First, we discuss sources of uncertainty, how to model it using probability distributions, and different approaches for propagating uncertainty. Then, we propose a novel treemap technique designed to show uncertainty information. Our method relaxes the requirement of covering the entire designated space that traditional techniques adhere to. We propose modulated sine waves as a quantitative encoding of uncertainty, yet our resulting method is flexible enough to work with various visual variables. Next, we investigate how to perform dimensionality reduction on uncertain data. We identify two general approaches: Monte Carlo sampling and analytical methods. We apply the former to adapt stress majorization for creating layouts of probabilistic graphs. While Monte Carlo methods can be applied to a wide range of problems, the resulting visualizations can be difficult to interpret. Analytical approaches, on the other hand, do not share this drawback but are only viable if the uncertainty information can be propagated analytically through the projection. We show how this can be done to arrive at an uncertainty-aware version of principal component analysis. Moreover, the analytical approach allows us to understand the projection's sensitivity to uncertainty in the data. Together with a summary of the developed methods, this thesis concludes with potential directions for future research. For this, we discuss Bayesian methods and their potential applications for handling uncertainty in visualization. Furthermore, we propose stippling, a form of visual abstraction, as a new way to visualize uncertainty in scalar fields.
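    The Monte Carlo approach described in the thesis can be sketched as follows. This is a minimal illustration with hypothetical names: per-point isotropic Gaussian uncertainty is sampled repeatedly and each realization is pushed through an arbitrary projection (here a plain SVD-based PCA stands in for stress majorization).

```python
import numpy as np

def monte_carlo_project(mus, sigmas, project, n_samples=100, seed=0):
    """Propagate per-point Gaussian uncertainty through a projection by
    sampling realizations of the data set and projecting each one.
    mus: (n, d) means; sigmas: (n,) isotropic standard deviations."""
    rng = np.random.default_rng(seed)
    layouts = []
    for _ in range(n_samples):
        X = mus + rng.normal(size=mus.shape) * sigmas[:, None]
        layouts.append(project(X))
    return np.stack(layouts)          # (n_samples, n, 2)

def pca_2d(X):
    """Center the data and project onto the top-2 right singular vectors."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

mus = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
sigmas = np.array([0.1, 0.1, 0.2, 0.2])
layouts = monte_carlo_project(mus, sigmas, pca_2d)
print(layouts.shape)                  # (100, 4, 2)
```

    The sketch also hints at the interpretation problem noted above: the projection axes (and their signs) can flip between sampled realizations, so the resulting point clouds are not directly comparable without aligning the layouts first.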

    Stippling of 2D Scalar Fields

    No full text
    We propose a technique to represent two-dimensional data using stipples. While stippling is often regarded as an illustrative method, we argue that it is worth investigating its suitability for the visualization domain. For this purpose, we generalize the Linde-Buzo-Gray stippling algorithm for information visualization purposes to encode continuous and discrete 2D data. Our proposed modifications provide more control over the resulting distribution of stipples for encoding additional information into the representation, such as contours. We show different approaches to depict contours in stipple drawings based on locally adjusting the stipple distribution. Combining stipple-based gradients and contours allows for simultaneous assessment of the overall structure of the data while preserving important local details. We discuss the applicability of our technique using datasets from different domains and conduct validation studies to assess the perception of stippled representations.
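    As a rough sketch of the underlying idea (a simplified stand-in for the full Linde-Buzo-Gray scheme, which additionally splits and removes stipples), stipple placement can be approximated by weighted Lloyd relaxation: each stipple repeatedly moves to the density-weighted centroid of its Voronoi cell, so stipples concentrate where the scalar field is large. All names here are illustrative.

```python
import numpy as np

def stipple(density, n_points=50, iterations=5, seed=0):
    """Place stipples on a 2-D scalar field via weighted Lloyd relaxation."""
    h, w = density.shape
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0, [w, h], size=(n_points, 2))
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.column_stack([xs.ravel() + 0.5, ys.ravel() + 0.5])
    wts = density.ravel()
    for _ in range(iterations):
        # assign each pixel to its nearest stipple (brute-force Voronoi)
        d2 = ((pix[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        owner = d2.argmin(axis=1)
        for i in range(n_points):
            mask = owner == i
            total = wts[mask].sum()
            if total > 0:  # move stipple to the weighted centroid of its cell
                pts[i] = (pix[mask] * wts[mask, None]).sum(0) / total
    return pts

# Toy density: a smooth bump centered in a 32x32 field
yy, xx = np.mgrid[0:32, 0:32]
dens = np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / 100.0)
pts = stipple(dens)
print(pts.shape)  # (50, 2)
```

    The paper's contour handling would correspond to locally modifying the density (and hence the cell weights) near contour lines; that part is omitted here.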

    Predicting intent behind selections in scatterplot visualizations

    No full text
    Predicting and capturing an analyst’s intent behind a selection in a data visualization is valuable in two scenarios: First, a successful prediction of a pattern an analyst intended to select can be used to auto-complete a partial selection which, in turn, can improve the correctness of the selection. Second, knowing the intent behind a selection can be used to improve recall and reproducibility. In this paper, we introduce methods to infer analysts’ intents behind selections in data visualizations, such as scatterplots. We describe intents based on patterns in the data, and identify algorithms that can capture these patterns. Upon an interactive selection, we compare the selected items with the results of a large set of computed patterns, and use various ranking approaches to identify the best pattern for an analyst’s selection. We store annotations and the metadata to reconstruct a selection, such as the type of algorithm and its parameterization, in a provenance graph. We present a prototype system that implements these methods for tabular data and scatterplots. Analysts can select a prediction to auto-complete partial selections and to seamlessly log their intents. We discuss implications of our approach for reproducibility and reuse of analysis workflows. We evaluate our approach in a crowd-sourced study, where we show that auto-completing selections improves accuracy, and that we can accurately capture pattern-based intent.
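    One plausible ranking criterion for matching a partial selection against precomputed patterns is set overlap; the sketch below uses Jaccard similarity with hypothetical pattern names (the paper evaluates several ranking approaches, not necessarily this one).

```python
def jaccard(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two item-id sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_patterns(selection, patterns):
    """Rank candidate patterns (name -> set of item ids) by similarity
    to the analyst's (possibly partial) selection, best first."""
    scored = [(jaccard(selection, items), name) for name, items in patterns.items()]
    return sorted(scored, reverse=True)

patterns = {
    "cluster_0": {1, 2, 3, 4},
    "cluster_1": {5, 6, 7},
    "outliers":  {9},
}
print(rank_patterns({1, 2, 3}, patterns)[0][1])  # cluster_0
```

    Auto-completion then amounts to replacing the partial selection with the item set of the top-ranked pattern, while the pattern name and parameters can be logged in the provenance graph.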

    Superpixel-based structure classification for laparoscopic surgery

    No full text
    Minimally-invasive interventions offer multiple benefits for patients, but also entail drawbacks for the surgeon. The goal of context-aware assistance systems is to alleviate some of these difficulties. Localizing and identifying anatomical structures, malignant tissue, and surgical instruments through endoscopic image analysis is paramount for an assistance system, making online measurements and augmented reality visualizations possible. Furthermore, such information can be used to assess the progress of an intervention, thereby allowing for context-aware assistance. In this work, we present an approach for such an analysis. First, a given laparoscopic image is divided into groups of connected pixels, so-called superpixels, using the SEEDS algorithm. The content of a given superpixel is then described using information regarding its color and texture. Using a Random Forest classifier, we determine the class label of each superpixel. We evaluated our approach on a publicly available dataset for laparoscopic instrument detection and achieved a DICE score of 0.69.
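    The DICE score reported above is the standard overlap measure for segmentation masks, 2|A ∩ B| / (|A| + |B|). A minimal sketch with a toy example:

```python
import numpy as np

def dice(pred, truth):
    """DICE coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * (pred & truth).sum() / denom if denom else 1.0

a = np.array([[1, 1, 0], [0, 1, 0]])  # predicted instrument mask
b = np.array([[1, 0, 0], [0, 1, 1]])  # ground-truth mask
print(round(dice(a, b), 3))           # 2*2/(3+3) = 0.667
```

    A score of 1.0 means perfect overlap; the 0.69 reported for superpixel-level instrument detection indicates substantial but imperfect agreement with the ground truth.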

    Context-aware Augmented Reality in laparoscopic surgery

    No full text
    Augmented Reality is a promising paradigm for intraoperative assistance. Yet, apart from technical issues, a major obstacle to its clinical application is the man-machine interaction. Visualization of unnecessary, obsolete, or redundant information may cause confusion and distraction, reducing the usefulness and acceptance of the assistance system. We propose a system capable of automatically filtering the available information based on recognized phases in the operating room. Our system offers a specific selection of available visualizations which best suit the surgeon's needs. The system was implemented for use in laparoscopic liver and gallbladder surgery and evaluated in phantom experiments in conjunction with expert interviews.